Deep neural networks, which require large amounts of annotated samples, have recently been widely applied to nuclei instance segmentation of H&E-stained pathology images. However, it is inefficient and unnecessary to label every pixel of a nuclei image dataset, which usually contains similar and redundant patterns. Although unsupervised and semi-supervised learning methods have been studied for nuclei segmentation, very few works have delved into the selective labeling of samples to reduce the annotation workload. Thus, in this paper, we propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner. In the proposed framework, we first develop a novel consistency-based patch selection method to determine which image patches are the most beneficial to training. Then we introduce a conditional single-image GAN with a component-wise discriminator to synthesize more training samples. Lastly, our framework trains an existing segmentation model on the augmented samples. Experimental results show that the proposed method obtains the same level of performance as a fully-supervised baseline while annotating less than 5% of the pixels on some benchmarks.
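The consistency-based selection idea can be sketched generically: score each unlabeled patch by how much a model's predictions disagree across random augmentations, and send the least consistent patches to the annotator. The variance-based score, the flip augmentations, and the `predict` interface below are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def consistency_scores(patches, predict, n_aug=8, rng=None):
    """Score each patch by prediction variance across random flips.

    `predict` maps a patch to a per-pixel foreground-probability map;
    predictions are un-flipped before comparison so they are aligned.
    """
    rng = rng or np.random.default_rng(0)
    scores = []
    for patch in patches:
        preds = []
        for _ in range(n_aug):
            flip_h, flip_v = rng.integers(0, 2, size=2)
            aug = patch[::-1] if flip_v else patch
            aug = aug[:, ::-1] if flip_h else aug
            p = predict(aug)
            p = p[:, ::-1] if flip_h else p  # undo the flips
            p = p[::-1] if flip_v else p
            preds.append(p)
        scores.append(float(np.var(np.stack(preds), axis=0).mean()))
    return np.array(scores)

def select_for_annotation(patches, predict, budget):
    """Pick the `budget` least-consistent patches for labeling."""
    scores = consistency_scores(patches, predict)
    return np.argsort(scores)[::-1][:budget]
```

A symmetric patch yields identical predictions under every flip (score 0), so the budget goes to patches where the model is unstable.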
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
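The patch-based workaround mentioned above can be sketched as follows: instead of feeding a large image to the network at once, fixed-size windows are cropped from it. The non-overlapping tiling below is a minimal illustration; real pipelines typically sample random or overlapping patches:

```python
import numpy as np

def extract_patches(image, patch_size):
    """Tile a (H, W, C) image into non-overlapping square crops,
    discarding any partial patches at the right/bottom border."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

img = np.zeros((512, 768, 3), dtype=np.uint8)  # hypothetical large image
patches = extract_patches(img, 256)
print(patches.shape)  # (6, 256, 256, 3): a 2x3 grid of 256px tiles
```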
The image captioning task is typically realized by an auto-regressive method that decodes the text tokens one by one. We present a diffusion-based captioning model, dubbed DDCap, to allow more decoding flexibility. Unlike image generation, where the output is continuous and redundant with a fixed length, texts in image captions are categorical and short, with varied lengths. Therefore, naively applying the discrete diffusion model to text decoding does not work well, as shown in our experiments. To address the performance gap, we propose several key techniques, including best-first inference, concentrated attention mask, text length prediction, and image-free training. On COCO without additional caption pre-training, it achieves a CIDEr score of 117.8, which is +5.0 higher than the auto-regressive baseline with the same architecture in the controlled setting. It also achieves a CIDEr score +26.8 higher than the auto-regressive baseline (230.3 vs. 203.5) on a caption infilling task. With 4M vision-language pre-training images and the base-sized model, we reach a CIDEr score of 125.1 on COCO, which is competitive with the best well-developed auto-regressive frameworks. The code is available at https://github.com/buxiangzhiren/DDCap.
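As a rough illustration of the discrete-diffusion setup for text (a generic absorbing-state process, not DDCap's exact formulation), the forward process corrupts a caption by replacing each token with a mask id with probability growing in the timestep; the model is then trained to recover the original tokens from the partially masked sequence. The `MASK_ID` value is a hypothetical vocabulary choice:

```python
import numpy as np

MASK_ID = 0  # hypothetical id of the [MASK] token in the vocabulary

def corrupt(tokens, t, T, rng):
    """Absorbing-state forward process: at step t of T, each token is
    independently replaced by MASK_ID with probability t / T."""
    tokens = np.asarray(tokens)
    mask = rng.random(tokens.shape) < t / T
    return np.where(mask, MASK_ID, tokens)

rng = np.random.default_rng(0)
caption = [12, 7, 99, 34, 5]           # token ids of a caption
print(corrupt(caption, t=20, T=20, rng=rng))  # t = T: fully masked
```

At t = 0 the caption is untouched; at t = T every token is absorbed into the mask state, and decoding runs this process in reverse.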
Generative adversarial networks (GANs) have attracted great attention due to their simple yet effective training mechanism and superior image generation quality. With the capability of generating photo-realistic high-resolution (e.g., 1024×1024) images, recent GAN models have greatly narrowed the gap between generated images and real ones. Many recent works therefore show emerging interest in leveraging pre-trained GAN models by exploiting the well-behaved latent space and the learned GAN priors. In this paper, we briefly review recent progress on leveraging pre-trained large-scale GAN models from three aspects: 1) the training of large-scale generative adversarial networks, 2) exploring and understanding pre-trained GAN models, and 3) leveraging these models for subsequent tasks such as image restoration and editing. More information about relevant methods and repositories can be found at https://github.com/csmliu/pretretaining-gans.
There has been a growing interest in developing image super-resolution (SR) algorithms that convert low-resolution (LR) images to higher-resolution ones, but automatically evaluating the visual quality of super-resolved images remains a challenging problem. Here we look at the problem of SR image quality assessment (SR IQA) in a two-dimensional (2D) space of deterministic fidelity (DF) versus statistical fidelity (SF). This allows us to better understand the strengths and weaknesses of existing SR algorithms, which produce images in different clusters of the 2D (DF, SF) space. Specifically, we observe an interesting trend: more traditional SR algorithms typically tend to optimize for DF while losing SF, whereas more recent generative adversarial network (GAN)-based approaches, by contrast, show a strong advantage in achieving high SF but sometimes fall short in maintaining DF. Furthermore, we propose an uncertainty weighting scheme based on content-dependent sharpness and texture assessment that merges the two fidelity measures into an overall quality prediction named the Super-Resolution Image Fidelity (SRIF) index, which demonstrates superior performance against state-of-the-art IQA models when tested on subject-rated datasets.
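The fusion step can be sketched as a content-adaptive convex combination of the two fidelity measures. The linear weighting rule below and its dependence on a single sharpness value in [0, 1] are hypothetical stand-ins for the paper's sharpness/texture assessment, and both fidelity scores are assumed pre-normalized to [0, 1]:

```python
def fuse_fidelity(df, sf, sharpness):
    """Combine deterministic fidelity (df) and statistical fidelity (sf)
    into one quality score. `sharpness` in [0, 1] shifts weight toward
    df on sharp, structured content and toward sf on textured content."""
    w = 0.5 + 0.5 * sharpness   # hypothetical content-dependent weight
    return w * df + (1.0 - w) * sf
```

For example, `fuse_fidelity(0.8, 0.6, 0.5)` weights DF at 0.75 and yields an overall score of 0.75; a real index would calibrate the weight against human ratings.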
For semantic-guided cross-view image translation, it is crucial to learn where to sample pixels from the source-view image and where to reallocate them under the guidance of the target-view semantic map, especially when there is little overlap or a drastic view difference between the source and target images. Hence, one not only needs to encode the long-range dependencies among pixels in both the source-view image and the target-view semantic map, but also to translate these learned dependencies. To this end, we propose a novel generative adversarial network, PI-Trans, which mainly consists of a novel Parallel-ConvMLP module and an Implicit Transformation module at multiple semantic levels. Extensive experimental results show that the proposed PI-Trans achieves the best qualitative and quantitative performance by a large margin compared to the state-of-the-art methods on two challenging datasets. The code will be made available at https://github.com/amazingren/pi-trans.
Despite recent progress in generative adversarial network (GAN)-based vocoders, in which the model generates a raw waveform conditioned on a mel spectrogram, it is still challenging to synthesize high-fidelity audio for numerous speakers across various recording environments. In this work, we present BigVGAN, a universal vocoder that generalizes well to various unseen conditions in the zero-shot setting. We introduce periodic nonlinearities and anti-aliased representation into the generator, which brings the desired inductive bias for waveform synthesis and significantly improves audio quality. Based on our improved generator and the state-of-the-art discriminators, we train our GAN vocoder at the largest scale, up to 112M parameters, which is unprecedented in the literature. In particular, we identify and address the training instabilities specific to this scale, while maintaining high-fidelity output without over-regularization. Our BigVGAN achieves state-of-the-art zero-shot performance for various out-of-distribution scenarios, including new speakers, novel languages, singing voices, music and instrumental audio, in unseen (even noisy) recording environments. We will release our code and model at: https://github.com/nvidia/bigvgan
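The periodic nonlinearity referred to above is the Snake activation, x + (1/α)·sin²(αx), whose frequency parameter α is learnable per channel in BigVGAN's generator; the anti-aliasing is applied around it via low-pass filtered up/downsampling, which is omitted in this minimal NumPy sketch:

```python
import numpy as np

def snake(x, alpha=1.0):
    """Snake activation: x + (1/alpha) * sin^2(alpha * x).
    The periodic sin^2 term injects an inductive bias toward
    oscillatory signals such as audio waveforms."""
    return x + np.sin(alpha * x) ** 2 / alpha

x = np.linspace(-np.pi, np.pi, 5)
print(snake(x))
```

Unlike ReLU, the function never saturates to a constant: it behaves like the identity plus a bounded periodic ripple, so gradients carry frequency information everywhere.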
In recent years, facial semantic guidance (including facial landmarks, facial heatmaps, and facial parsing maps) and facial generative adversarial networks (GANs) have been widely used in blind face restoration (BFR). Although existing BFR methods achieve good performance in ordinary cases, these solutions have limited resilience when applied to images with severe degradation and pose variations (e.g., looking right, looking left, laughing, etc. in real-world scenarios). In this work, we propose a well-designed blind face restoration network with generative facial priors. The proposed network mainly consists of an asymmetric codec and a StyleGAN2 prior network. In the asymmetric codec, we adopt a mixed multi-path residual block (MMRB) to gradually extract the weak texture features of the input image, which allows better preservation of the original facial features and avoids excessive hallucination. The MMRB can also be plugged into other networks. Moreover, thanks to the rich and diverse facial priors of the StyleGAN2 model, we adopt a fine-tuning approach to flexibly restore natural and realistic facial details. In addition, a novel self-supervised training strategy is specially designed for the face restoration task to bring the distribution closer to the target and maintain training stability. Extensive experiments on synthetic and real-world datasets demonstrate that our model achieves superior performance on face restoration and face super-resolution tasks.
Exploiting a general-purpose neural architecture to replace hand-designed modules or inductive biases has recently drawn extensive interest. However, existing tracking approaches rely on customized sub-modules and need prior knowledge for architecture selection, hindering the development of tracking within a more general system. This paper presents a simplified tracking architecture (SimTrack) that leverages a transformer backbone for joint feature extraction and interaction. Unlike existing Siamese trackers, we serialize the input images and concatenate them directly before a one-branch backbone. Feature interaction inside the backbone helps to remove the carefully designed interaction modules and yields a more efficient framework. To reduce the information loss from down-sampling in vision transformers, we further propose a foveal window strategy, providing more diverse input patches at an acceptable computational cost. Our SimTrack improves the baseline with 2.5%/2.6% AUC gains on LaSOT/TNL2K and achieves results competitive with other specialized tracking algorithms without bells and whistles.
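The serialize-and-concatenate step can be sketched as flattening both the exemplar (template) and search-region crops into patch tokens and joining them into one sequence for a single transformer backbone. The crop sizes and patch size below are illustrative assumptions, and the linear patch-embedding projection is omitted:

```python
import numpy as np

def patchify(image, p):
    """Split a (H, W, C) image into flattened p*p*C patch tokens."""
    h, w, c = image.shape
    return (image.reshape(h // p, p, w // p, p, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(-1, p * p * c))

exemplar = np.zeros((112, 112, 3))   # template crop (hypothetical size)
search = np.zeros((224, 224, 3))     # search-region crop
seq = np.concatenate([patchify(exemplar, 16), patchify(search, 16)])
print(seq.shape)  # (49 + 196, 768) = (245, 768)
```

The backbone's self-attention then runs over the joint 245-token sequence, so template/search interaction happens in every layer without a dedicated correlation module.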
Class imbalance occurs in many real-world applications, including image classification, where the number of images in each class differs significantly. With imbalanced data, generative adversarial networks (GANs) lean towards the majority-class samples. Two recent methods, Balancing GAN (BAGAN) and improved BAGAN (BAGAN-GP), have been proposed as augmentation tools to handle this problem and restore balance to the data. The former pre-trains the autoencoder weights in an unsupervised manner; however, it is unstable when images from different classes have similar features. The latter improves upon BAGAN by facilitating supervised autoencoder training, but its pre-training is biased towards the majority classes. In this work, we propose a novel conditional variational autoencoder with balanced pre-training for generative adversarial networks (CAPGAN) as an augmentation tool that generates realistic synthetic images. In particular, we utilize a conditional convolutional variational autoencoder to provide supervised and balanced pre-training for the GAN initialization, together with gradient-penalty training. Our proposed method presents superior performance on highly imbalanced versions of MNIST, Fashion-MNIST, CIFAR-10 and two medical imaging datasets. Our method can synthesize high-quality minority-class samples in terms of Fréchet inception distance, structural similarity index measure and perceptual quality.
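The balanced pre-training idea can be illustrated with inverse-frequency sampling weights, so that minority-class images are drawn as often as majority-class ones during autoencoder pre-training. This is a generic sketch of class balancing, not CAPGAN's exact procedure:

```python
import numpy as np

def balanced_sample_weights(labels):
    """Per-sample sampling weights inversely proportional to class
    frequency, normalized so every class has equal total probability."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes, counts))
    w = np.array([1.0 / freq[y] for y in labels])
    return w / w.sum()

labels = [0] * 900 + [1] * 100   # 9:1 majority/minority imbalance
w = balanced_sample_weights(labels)
print(w[0], w[-1])  # each minority sample gets 9x the weight
```

Sampling mini-batches with these weights (e.g., via a weighted random sampler) yields roughly class-balanced batches, which is what keeps the pre-training from drifting toward the majority class.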